Current pre-trained language models have enabled remarkable improvements in downstream tasks, but it remains difficult to distinguish effects of statistical correlation from more systematic logical reasoning grounded in understanding of the real world. In this paper we tease these factors apart by leveraging counterfactual conditionals, which force language models to predict unusual consequences based on hypothetical propositions. We introduce a set of tests drawn from psycholinguistic experiments, as well as larger-scale controlled datasets, to probe counterfactual predictions from a variety of popular pre-trained language models. We find that models are consistently able to override real-world knowledge in counterfactual scenarios, and that this effect is more robust when baseline world knowledge is stronger -- however, we also find that for most models this effect appears largely to be driven by simple lexical cues. When we mitigate effects of both world knowledge and lexical cues to test knowledge of linguistic nuances of counterfactuals, we find that only GPT-3 shows sensitivity to these nuances, though this sensitivity is also non-trivially impacted by lexical associative factors.
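A minimal sketch of the kind of counterfactual probe this abstract describes: score a causal language model's probability for a real-world completion versus a counterfactual-consistent one after a hypothetical premise. The model choice, the prompt, and the candidate completions below are illustrative assumptions, not the paper's actual stimuli or code.

```python
# Sketch: compare completion probabilities under a counterfactual premise.
# Prompt and candidate words are hypothetical examples, not the paper's items.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def completion_logprob(prompt: str, completion: str) -> float:
    """Sum of log-probabilities the model assigns to `completion` given `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    # Score only the completion tokens; the logit at pos-1 predicts the token at pos.
    for pos in range(prompt_ids.shape[1], full_ids.shape[1]):
        total += log_probs[0, pos - 1, full_ids[0, pos]].item()
    return total

context = "If cats were vegetarians, cats would love to eat"
factual = completion_logprob(context, " fish")       # real-world association
counterfactual = completion_logprob(context, " carrots")  # premise-consistent
print(f"fish: {factual:.2f}  carrots: {counterfactual:.2f}")
print("overrides world knowledge" if counterfactual > factual else "follows world knowledge")
```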
While sentence anomalies have been applied periodically for testing in NLP, we have not yet established a picture of the precise status of anomaly information in representations from NLP models. In this paper we aim to fill two primary gaps, focusing on the domain of syntactic anomalies. First, we explore fine-grained differences in anomaly encoding by designing probing tasks that vary the hierarchical level at which anomalies occur in a sentence. Second, we test not only models' ability to detect a given anomaly, but also the generality of the detected anomaly signal, by examining transfer between distinct anomaly types. Results suggest that all models encode some information supporting anomaly detection, but detection performance varies across anomalies, and only representations from recent transformer models show signs of generalized knowledge of anomalies. Follow-up analyses support the notion that these models pick up on a legitimate notion of sentence oddity, while coarser word-position information is likely also a contributor to the observed anomaly detection.
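A minimal sketch of the probing-with-transfer idea described above: fit a linear probe on frozen sentence representations to separate normal from anomalous sentences of one type, then evaluate the same probe on a different anomaly type to gauge how general the detected signal is. The encoder, pooling choice, and toy example sentences are assumptions, not the paper's setup.

```python
# Sketch: linear anomaly probe on frozen representations, with cross-type transfer.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()

def embed(sentences):
    """Mean-pooled final-layer representations for a list of sentences."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state
    mask = batch.attention_mask.unsqueeze(-1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

def probe_transfer(train_sents, train_labels, test_sents, test_labels):
    """Fit an anomaly probe on one anomaly type; score it on another type."""
    clf = LogisticRegression(max_iter=1000).fit(embed(train_sents), train_labels)
    return clf.score(embed(test_sents), test_labels)

# Toy illustration (1 = anomalous, 0 = normal): train on agreement violations,
# test on semantic role reversals.
agreement = (["The keys to the cabinet is on the table.",
              "The keys to the cabinet are on the table."], [1, 0])
reversal = (["The coffee drank the man.",
             "The man drank the coffee."], [1, 0])
print(probe_transfer(*agreement, *reversal))
```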
Every day, humans perform many closely related activities that involve subtle discriminative motions, such as putting on a shirt vs. putting on a jacket, or shaking hands vs. giving a high five. Activity recognition by ethical visual AI could provide insights into our patterns of daily life, but existing activity recognition datasets do not capture the massive diversity of these human activities around the world. To address this limitation, we introduce Collector, a free mobile app for recording video while simultaneously annotating objects and activities of consenting subjects. This new data collection platform was used to curate the Consented Activities of People (CAP) dataset, the first large-scale, fine-grained activity dataset of people worldwide. The CAP dataset contains 1.45M video clips of daily life, labeled with 512 fine-grained activity labels and collected from 780 subjects in 33 countries. We provide activity classification and activity detection benchmarks for this dataset, and analyze baseline results to gain insight into how people around the world perform common activities. The dataset, benchmarks, evaluation tools, public leaderboards, and mobile app are available at visym.github.io/cap.
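A minimal sketch of the clip-level activity classification evaluation described above, assuming hypothetical clip-id to label dictionaries rather than the released CAP evaluation tools: overall and per-class top-1 accuracy over the fine-grained activity labels.

```python
# Sketch: top-1 and per-class accuracy for clip-level activity classification.
# The clip-id -> activity-label dictionaries are an assumed format.
from collections import Counter

def top1_accuracy(labels: dict[str, str], predictions: dict[str, str]) -> float:
    """Fraction of clips whose predicted activity matches the ground-truth label."""
    return sum(predictions.get(c) == a for c, a in labels.items()) / len(labels)

def per_class_accuracy(labels: dict[str, str], predictions: dict[str, str]) -> dict[str, float]:
    """Accuracy computed separately for each fine-grained activity class."""
    totals, hits = Counter(), Counter()
    for clip_id, activity in labels.items():
        totals[activity] += 1
        hits[activity] += int(predictions.get(clip_id) == activity)
    return {activity: hits[activity] / totals[activity] for activity in totals}
```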
As machine learning models become increasingly prevalent in motion forecasting systems for autonomous vehicles (AVs), it is critical to ensure that model predictions are safe and reliable. However, exhaustively collecting and labeling the data needed to fully test the long tail of rare and challenging scenarios is difficult and expensive. In this work, we construct a new benchmark for evaluating and improving model robustness by applying perturbations to existing data. Specifically, we conduct an extensive labeling effort to identify causal agents, i.e., agents whose presence influences human drivers' behavior in any way, in the Waymo Open Motion Dataset (WOMD), and we use these labels to perturb the data by deleting non-causal agents from scenes. We then evaluate a diverse set of state-of-the-art deep learning model architectures on our proposed benchmark and find that all models exhibit large shifts under perturbation: under non-causal perturbations, we observe a 25-38% relative change compared to the original data. We then investigate techniques to improve model robustness, including increasing the size of the training dataset and using targeted data augmentations that drop agents throughout training. We plan to provide the causal agent labels as an additional attribute of WOMD and to release the robustness benchmarks to help the community build more reliable and safe deep learning models for motion forecasting.
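A minimal sketch of the perturbation idea described above, under assumed data layout and field names rather than the WOMD schema: remove agents labeled non-causal from a scene, re-run a forecasting model, and report the relative change in a displacement-error metric. The `forecast_model` callable is hypothetical.

```python
# Sketch: non-causal-agent deletion as a robustness perturbation.
from dataclasses import dataclass
import numpy as np

@dataclass
class Agent:
    track: np.ndarray   # (T, 2) observed xy positions
    future: np.ndarray  # (F, 2) ground-truth future xy positions
    is_causal: bool     # human label: does this agent influence the ego driver?

def min_ade(predictions: np.ndarray, future: np.ndarray) -> float:
    """Minimum average displacement error over K predicted trajectories of shape (K, F, 2)."""
    return float(np.min(np.linalg.norm(predictions - future[None], axis=-1).mean(axis=-1)))

def relative_change(forecast_model, ego: Agent, others: list[Agent]) -> float:
    """Error on the original scene vs. the scene with non-causal agents removed."""
    original = min_ade(forecast_model(ego, others), ego.future)
    causal_only = [a for a in others if a.is_causal]
    perturbed = min_ade(forecast_model(ego, causal_only), ego.future)
    return abs(perturbed - original) / max(original, 1e-9)
```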
The research community has increasing interest in autonomous driving research, despite the resource intensity of obtaining representative real-world data. Existing self-driving datasets are limited in the scale and variation of the environments they capture, even though generalization within and between operating regions is crucial to the overall viability of the technology. In an effort to help align the research community's contributions with real-world self-driving problems, we introduce a new large-scale, high-quality, diverse dataset. Our new dataset consists of 1150 scenes that each span 20 seconds, consisting of well synchronized and calibrated high-quality LiDAR and camera data captured across a range of urban and suburban geographies. It is 15x more diverse than the largest camera+LiDAR dataset available based on our proposed geographical coverage metric. We exhaustively annotated this data with 2D (camera image) and 3D (LiDAR) bounding boxes, with consistent identifiers across frames. Finally, we provide strong baselines for 2D as well as 3D detection and tracking tasks. We further study the effects of dataset size and generalization across geographies on 3D detection methods. Find data, code, and more up-to-date information at http://www.waymo.com/open.
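A minimal sketch of the kind of per-frame record the abstract describes: synchronized camera and LiDAR data with 2D and 3D boxes that share track identifiers across frames. The class and field names are assumptions for illustration, not the released dataset schema or API.

```python
# Sketch: assumed per-frame data structure with tracked 2D/3D annotations.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Box3D:
    track_id: str                        # consistent across frames for the same object
    center: tuple[float, float, float]   # x, y, z in meters
    size: tuple[float, float, float]     # length, width, height in meters
    heading: float                       # yaw in radians

@dataclass
class Box2D:
    track_id: str                        # shared with the matching 3D box
    camera: str                          # which camera the box was drawn on
    xyxy: tuple[float, float, float, float]

@dataclass
class Frame:
    timestamp_us: int
    lidar_points: np.ndarray                 # (N, 3) point cloud
    images: dict[str, np.ndarray]            # camera name -> HxWx3 image
    boxes_3d: list[Box3D] = field(default_factory=list)
    boxes_2d: list[Box2D] = field(default_factory=list)
```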